Matrix chain multiplication (or Matrix Chain Ordering Problem, MCOP) is an optimization problem that can be solved using dynamic programming. Given a sequence of matrices, the goal is to find the most efficient way to multiply these matrices. The problem is not actually to ''perform'' the multiplications, but merely to decide the sequence of the matrix multiplications involved.

We have many options because matrix multiplication is associative. In other words, no matter how we parenthesize the product, the result obtained will remain the same. For example, if we had four matrices ''A'', ''B'', ''C'', and ''D'', we would have:

:((''AB'')''C'')''D'' = (''A''(''BC''))''D'' = (''AB'')(''CD'') = ''A''((''BC'')''D'') = ''A''(''B''(''CD'')).

However, the order in which we parenthesize the product affects the number of simple arithmetic operations needed to compute the product, that is, the ''efficiency''. For example, suppose ''A'' is a 10 × 30 matrix, ''B'' is a 30 × 5 matrix, and ''C'' is a 5 × 60 matrix. Then,

:(''AB'')''C'' = (10×30×5) + (10×5×60) = 1500 + 3000 = 4500 operations
:''A''(''BC'') = (30×5×60) + (10×30×60) = 9000 + 18000 = 27000 operations.

Clearly the first method is more efficient. With this information, the problem statement can be refined: how do we determine the optimal parenthesization of a product of ''n'' matrices? We could try each possible parenthesization (brute force), but this requires a run-time that is exponential in the number of matrices, which is far too slow and impractical for large ''n''. A quicker solution can be achieved by breaking the problem up into a set of related subproblems. By solving each subproblem only once and reusing the solutions, we can drastically reduce the run-time required. This concept is known as dynamic programming.

== A Dynamic Programming Algorithm ==

To begin, let us assume that all we really want to know is the minimum cost, or minimum number of arithmetic operations, needed to multiply out the matrices. If we are only multiplying two matrices, there is only one way to multiply them, so the minimum cost is the cost of doing this. In general, we can find the minimum cost using the following recursive algorithm:

* Take the sequence of matrices and separate it into two subsequences.
* Find the minimum cost of multiplying out each subsequence.
* Add these costs together, and add in the cost of multiplying the two result matrices.
* Do this for each possible position at which the sequence of matrices can be split, and take the minimum over all of them.

For example, if we have four matrices ''ABCD'', we compute the cost required to find each of (''A'')(''BCD''), (''AB'')(''CD''), and (''ABC'')(''D''), making recursive calls to find the minimum cost to compute ''ABC'', ''AB'', ''CD'', and ''BCD''. We then choose the best one. Better still, this yields not only the minimum cost, but also demonstrates the best way of doing the multiplication: group it the way that yields the lowest total cost, and do the same for each factor.

However, if we implement this algorithm we discover that it is just as slow as the naive way of trying all parenthesizations! What went wrong? The answer is that we're doing a lot of redundant work. For example, above we made a recursive call to find the best cost for computing both ''ABC'' and ''AB''. But finding the best cost for computing ''ABC'' also requires finding the best cost for ''AB''. As the recursion grows deeper, more and more of this type of unnecessary repetition occurs, as the sketch below illustrates.
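To make the recursion concrete, here is a minimal sketch in Java of the naive algorithm described above. The class name, method name, and the convention that the ''i''-th matrix has dimensions dims[i-1] × dims[i] are illustrative choices, not part of the original text:

 public class NaiveMatrixChain {
     // Minimum number of scalar multiplications needed to compute
     // the product A_i ... A_j (1-based chain indices), where matrix
     // A_i has dimensions dims[i-1] x dims[i].
     static int minCost(int[] dims, int i, int j) {
         if (i == j) return 0; // a single matrix needs no multiplication
         int best = Integer.MAX_VALUE;
         // Try every split point: (A_i .. A_k)(A_{k+1} .. A_j).
         for (int k = i; k < j; k++) {
             int cost = minCost(dims, i, k)      // left subsequence
                      + minCost(dims, k + 1, j)  // right subsequence
                      + dims[i - 1] * dims[k] * dims[j]; // multiplying the two results
             best = Math.min(best, cost);
         }
         return best;
     }

     public static void main(String[] args) {
         // A: 10 x 30, B: 30 x 5, C: 5 x 60 -- the example from above.
         int[] dims = {10, 30, 5, 60};
         System.out.println(minCost(dims, 1, 3)); // prints 4500
     }
 }

Note how the same subchains (for instance ''AB'' inside ''ABC'') are recomputed many times; this is exactly the redundancy the next paragraph removes.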
One simple solution is called ''memoization'': each time we compute the minimum cost needed to multiply out a specific subsequence, we save it. If we are ever asked to compute it again, we simply give the saved answer and do not recompute it. Since there are about ''n''<sup>2</sup>/2 different subsequences, where ''n'' is the number of matrices, the space required to do this is reasonable. It can be shown that this simple trick brings the runtime down to O(''n''<sup>3</sup>) from O(2<sup>''n''</sup>), which is more than efficient enough for real applications. This is top-down dynamic programming; a Java sketch of this approach is given after the bottom-up code below.

Pseudocode:

 // Matrix A[i] has dimension p[i-1] x p[i] for i = 1..n
 MatrixChainOrder(int p[])
 {
     // length[p] = n + 1
     n = p.length - 1;
     // m[i, j] = Minimum number of scalar multiplications (i.e., cost)
     // needed to compute the matrix A[i]A[i+1]...A[j] = A[i..j]
     // cost is zero when multiplying one matrix
     for (i = 1; i <= n; i++)
         m[i, i] = 0;

     for (L = 2; L <= n; L++) { // L is chain length
         for (i = 1; i <= n - L + 1; i++) {
             j = i + L - 1;
             m[i, j] = MAXINT;
             for (k = i; k <= j - 1; k++) {
                 // q = cost/scalar multiplications
                 q = m[i, k] + m[k+1, j] + p[i-1]*p[k]*p[j];
                 if (q < m[i, j]) {
                     m[i, j] = q;
                     s[i, j] = k; // s[i, j] = second auxiliary table that stores k,
                                  // the split index that achieved minimal cost
                 }
             }
         }
     }
 }

*Note: The first index for p is 0 and the first index for m and s is 1.

Another solution is to anticipate which costs we will need and precompute them. It works like this:
* For each ''k'' from 2 to ''n'', the number of matrices:
** Compute the minimum costs of each subsequence of length ''k'', using the costs already computed.

The code in Java, using zero-based array indexes, along with a convenience method for printing the solved order of operations:

 public class MatrixOrderOptimization {
     protected int[][] m;
     protected int[][] s;

     public void matrixChainOrder(int[] dims) {
         int n = dims.length - 1;
         m = new int[n][n];
         s = new int[n][n];

         for (int lenMinusOne = 1; lenMinusOne < n; lenMinusOne++) {
             for (int i = 0; i < n - lenMinusOne; i++) {
                 int j = i + lenMinusOne;
                 m[i][j] = Integer.MAX_VALUE;
                 for (int k = i; k < j; k++) {
                     int cost = m[i][k] + m[k + 1][j]
                              + dims[i] * dims[k + 1] * dims[j + 1];
                     if (cost < m[i][j]) {
                         m[i][j] = cost;
                         s[i][j] = k; // split index that achieved minimal cost
                     }
                 }
             }
         }
     }

     public void printOptimalParenthesizations() {
         boolean[] inAResult = new boolean[s.length];
         printOptimalParenthesizations(s, 0, s.length - 1, inAResult);
     }

     void printOptimalParenthesizations(int[][] s, int i, int j, boolean[] inAResult) {
         if (i != j) {
             printOptimalParenthesizations(s, i, s[i][j], inAResult);
             printOptimalParenthesizations(s, s[i][j] + 1, j, inAResult);
             String istr = inAResult[i] ? "_result" : "";
             String jstr = inAResult[j] ? "_result" : "";
             System.out.println(" A_" + i + istr + " * A_" + j + jstr);
             inAResult[i] = true;
             inAResult[j] = true;
         }
     }
 }

At the end of this program, we have the minimum cost for the full sequence. Although this algorithm also requires O(''n''<sup>3</sup>) time, this approach has practical advantages: it requires no recursion and no testing whether a value has already been computed, and we can save space by throwing away some of the subresults that are no longer required. This is bottom-up dynamic programming: a second way by which this problem can be solved.
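For comparison with the bottom-up Java code above, here is a minimal sketch of the top-down (memoized) approach described at the start of this section, using the same zero-based dims convention; the class and field names are illustrative, not from the original text:

 import java.util.Arrays;

 public class MemoizedMatrixChain {
     private final int[] dims;   // matrix i has dimensions dims[i] x dims[i+1]
     private final int[][] memo; // memo[i][j] = min cost of chain i..j, -1 = unknown

     public MemoizedMatrixChain(int[] dims) {
         this.dims = dims;
         int n = dims.length - 1;
         memo = new int[n][n];
         for (int[] row : memo) Arrays.fill(row, -1);
     }

     public int minCost(int i, int j) {
         if (i == j) return 0;                    // one matrix: no cost
         if (memo[i][j] != -1) return memo[i][j]; // reuse a saved answer
         int best = Integer.MAX_VALUE;
         for (int k = i; k < j; k++) {            // same recursion as before,
             int cost = minCost(i, k) + minCost(k + 1, j)
                      + dims[i] * dims[k + 1] * dims[j + 1];
             best = Math.min(best, cost);
         }
         return memo[i][j] = best;                // but each subchain is solved once
     }

     public static void main(String[] args) {
         // A: 10 x 30, B: 30 x 5, C: 5 x 60, as in the example above.
         MemoizedMatrixChain chain = new MemoizedMatrixChain(new int[]{10, 30, 5, 60});
         System.out.println(chain.minCost(0, 2)); // prints 4500
     }
 }

Each of the O(''n''<sup>2</sup>) subchains is solved at most once, and each takes O(''n'') work to combine, giving the O(''n''<sup>3</sup>) bound mentioned above.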